These underground markets that deal with malicious large language models (LLMs) are called Mallas. This blog dives into the details of this dark industry and discusses the impact of these illicit LLMs on cybersecurity.
LLMs such as OpenAI's GPT-4 have shown impressive results in natural language processing, powering applications like chatbots and content generation. However, the same technology that supports these useful applications can be misused for malicious activities.
Recently, researchers from Indiana University Bloomington found 212 malicious LLMs for sale on underground marketplaces between April and September last year. One of the models, "WormGPT," made around $28,000 in just two months, revealing a trend of threat actors misusing AI and a rising demand for these harmful tools.
Many of the LLMs on these markets were uncensored models built on open-source foundations, while a few were jailbroken commercial models. Threat actors used Mallas to write phishing emails, build malware, and exploit zero-day vulnerabilities.
Tech giants in the AI industry have built safeguards against jailbreaking and mechanisms to detect malicious attempts. But threat actors have found ways around these guardrails, tricking models from Google, Meta, OpenAI, and Anthropic into providing malicious information.
Researchers found two uncensored LLMs: DarkGPT, which costs 78 cents per 50 messages, and Escape GPT, a subscription service that charges $64.98 a month. Both models generate harmful code that antivirus tools fail to detect two-thirds of the time. Another model, "WolfGPT," costs $150 and lets users write phishing emails that can slip past most spam detectors.
The findings suggest that all of the harmful AI models examined could produce malware, and 41.5% could create phishing emails. These models were built on OpenAI's GPT-3.5 and GPT-4, Claude Instant, Claude-2-100k, and Pygmalion 13B.
To fight these threats, experts have suggested compiling a dataset of prompts used to create malware and bypass safety features. They also recommend that AI companies release models with censorship settings enabled by default and grant access to uncensored models only for research purposes.
Despite all the talk of generative AI disrupting the world, the technology has so far failed to significantly transform white-collar work. Workers are experimenting with chatbots for tasks like drafting emails, and businesses are running plenty of pilots, but office work has yet to see a big AI overhaul.
That could be because we haven't given chatbots like Google's Gemini and OpenAI's ChatGPT the proper capabilities yet; they're typically limited to taking in and spitting out text via a chat interface.
Things may get more interesting in commercial settings once AI companies begin to deploy so-called "AI agents," which can take actions by operating other software on a computer or over the internet.
Anthropic, a rival of OpenAI, unveiled a major new product today that aims to make the case that tool use is the key to AI's next leap in usefulness. The company is allowing developers to instruct its chatbot Claude to use external services and software to complete more useful tasks.
Claude can, for example, use a calculator to solve the math problems that vex large language models, query a database storing customer information, or be directed to use other programs on a user's computer when that would be helpful.
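As a rough illustration of what this looks like for developers, here is a minimal sketch of the tool-use pattern with the Anthropic Python SDK. The "calculator" tool, its schema, and the model string are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch of Claude tool use via the Anthropic Python SDK.
# The "calculator" tool, its schema, and the model string are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

calculator_tool = {
    "name": "calculator",
    "description": "Evaluate a basic arithmetic expression and return the result.",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "e.g. '2859 * 444'"},
        },
        "required": ["expression"],
    },
}

question = {"role": "user", "content": "What is 2859 * 444?"}
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=[calculator_tool],
    messages=[question],
)

# When Claude decides the tool is needed, it pauses and asks the caller to run it.
if response.stop_reason == "tool_use":
    tool_call = next(block for block in response.content if block.type == "tool_use")
    result = str(eval(tool_call.input["expression"]))  # demo only; never eval untrusted input
    # Return the result as a tool_result block so Claude can finish its answer.
    followup = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        tools=[calculator_tool],
        messages=[
            question,
            {"role": "assistant", "content": response.content},
            {"role": "user", "content": [{
                "type": "tool_result",
                "tool_use_id": tool_call.id,
                "content": result,
            }]},
        ],
    )
    print(followup.content[0].text)
```

The same pattern extends to database lookups or desktop automation: the model emits a structured tool call, the developer's code executes it, and the result is fed back into the conversation.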
Anthropic has been helping several companies build Claude-based assistants for their employees. For example, the online tutoring company Study Fetch has built a way for Claude to use various platform tools to customize the user interface and syllabus content shown to students.
Other businesses are also joining the AI Stone Age. At its I/O developer conference earlier this month, Google showed off a few prototype AI agents, among other new AI features. One of the agents was created to handle online shopping returns by searching for the receipt in the customer's Gmail account, completing the return form, and scheduling a package pickup.
The Stone Age of chatbots represents a significant leap forward. Here’s what we can expect:
Microsoft introduced Copilot – its workplace assistant – earlier this year, labelling the product as a “copilot for work.”
Copilot, which will be made available to users from November 1, will be integrated into Microsoft 365 apps such as Word, Excel, Teams, and PowerPoint for subscribers, at a price of $30 per user per month.
Additionally, as part of the new service, employees at companies using Microsoft's Copilot could theoretically send their AI helpers to meetings in their place, letting them skip or double-book appointments and focus on other tasks.
Microsoft has been testing Copilot with businesses including General Motors, KPMG, and Goodyear; the assistant helps users with tasks like writing emails and code. Early feedback from those companies shows it being used to respond to emails quickly and to ask questions about meetings.
“[Copilot] combines the power of large language models (LLMs) with your data…to turn your words into the most powerful productivity tool on the planet,” Jared Spataro, corporate vice president of modern work and business applications at Microsoft, wrote in a March blog post.
Spataro promised that the technology would “lighten the load” for online users, stating that for many white-collar workers, “80% of our time is consumed with busywork that bogs us down.”
For many office workers, this so-called "busywork" includes attending meetings. According to a recent British study, office workers waste 213 hours annually, or 27 full working days, in meetings where the agenda could have been communicated by email.
Companies like Shopify are deliberately putting a stop to pointless meetings. The e-commerce giant made headlines over the summer when it introduced an internal "cost calculator" for staff meetings; according to company leadership, each 30-minute meeting costs the business between $700 and $1,600.
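Shopify has not published the formula behind its calculator, but the basic arithmetic is easy to sketch. The salary, overhead multiplier, and headcount figures below are hypothetical placeholders, not Shopify's actual inputs.

```python
# Back-of-the-envelope meeting cost estimate, in the spirit of Shopify's internal calculator.
# All figures below (salary, overhead multiplier, headcount) are hypothetical assumptions.

def meeting_cost(duration_minutes: float, attendees: int,
                 avg_annual_salary: float = 200_000, overhead: float = 1.4) -> float:
    """Estimate the loaded cost of employee time spent in a meeting."""
    working_hours_per_year = 2_080                      # 52 weeks x 40 hours
    hourly_rate = avg_annual_salary * overhead / working_hours_per_year
    return hourly_rate * (duration_minutes / 60) * attendees

# A 30-minute meeting with 10 people under these assumed rates:
print(f"${meeting_cost(30, 10):,.0f}")                  # about $673, in the ballpark of the figures cited
```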
Copilot aims to help reduce this expense. The AI assistant can "follow" meetings and produce a transcript, summary, and notes once they are over.
In July, Microsoft announced “the next wave of generative AI for Teams,” which included integrating Copilot more deeply into Teams calls and meetings.
“You can also ask Copilot to draft notes for you during the call and highlight key points, such as names, dates, numbers, and tasks using natural language commands[…]You can quickly synthesize key information from your chat threads—allowing you to ask specific questions (or use one of the suggested prompts) to help get caught up on the conversation so far, organize key discussion points, and summarize information relevant to you,” the company noted.
On the same theme, Spataro said: “Every meeting is a productive meeting with Copilot in Teams[…]It can summarize key discussion points—including who said what and where people are aligned and where they disagree—and suggest action items, all in real-time during a meeting.”
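Microsoft has not published how Copilot does this under the hood, but the general pattern (handing a transcript to an LLM with instructions to extract key points, disagreements, and action items) can be sketched generically. The sketch below uses the OpenAI Python SDK purely as a stand-in; the model name and prompt wording are assumptions, not Microsoft's implementation.

```python
# Generic sketch of LLM-based meeting summarization (not Microsoft's actual Copilot pipeline).
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(transcript: str) -> str:
    """Ask an LLM for key points, agreements/disagreements, and action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You summarize meeting transcripts. Return: 1) key discussion points, "
                "noting who said what; 2) where participants agree or disagree; "
                "3) action items with owners and deadlines."
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(summarize_meeting("Alice: Let's ship on Friday. Bob: I'd rather wait until Monday. ..."))
```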
However, Microsoft is not the only tech giant trying to make meetings more tolerable: Zoom and Google have also introduced AI-powered chatbots for the online workforce that can attend meetings on behalf of the user and report back with their conclusions.
The warning letter was issued the same day Meta revealed plans to incorporate AI-powered chatbots into its apps, including WhatsApp, Messenger, and Instagram.
In the letter, Markey wrote to Meta CEO Mark Zuckerberg that, “These chatbots could create new privacy harms and exacerbate those already prevalent on your platforms, including invasive data collection, algorithmic discrimination, and manipulative advertisements[…]I strongly urge you to pause the release of any AI chatbots until Meta understands the effect that such products will have on young users.”
According to Markey, the algorithms have already “caused serious harms” to customers, such as “collecting and storing detailed personal information[…]facilitating housing discrimination against communities of color.”
He added that while chatbots can benefit people, they also carry risks. In particular, he warned that chatbots could blur the difference between ads and content.
“Young users may not realize that a chatbot’s response is actually advertising for a product or service[…]Generative AI also has the potential to adapt and target advertising to an 'audience of one,' making ads even more difficult for young users to identify,” states Markey.
Markey also noted that chatbots might make social media platforms even more “addictive” to users than they already are.
“By creating the appearance of chatting with a real person, chatbots may significantly expand users’ – especially younger users’ – time on the platform, allowing the platform to collect more of their personal information and profit from advertising,” he wrote. “With chatbots threatening to supercharge these problematic practices, Big Tech companies, such as Meta, should abandon this 'move fast and break things' ethos and proceed with the utmost caution.”
The lawmaker is now asking Meta to answer a series of questions about the new chatbots, including questions about their impact on users’ privacy and on advertising.
The questions also seek detailed insight into the chatbots’ role in data collection and ask whether Meta will commit not to use information gleaned from them to target advertising at young users. Markey further asked whether ads might be integrated into the chatbots and, if so, how Meta intends to prevent those ads from confusing children.
In response, a Meta spokesperson confirmed that the company had received the letter.
Meta further notes in a blog post that it is working in collaboration with the government and other entities “to establish responsible guardrails” and is training the chatbots with safety in mind. For instance, Meta writes, the tools “will suggest local suicide and eating disorder organizations in response to certain queries, while making it clear that it cannot provide medical advice.”
ChatGPT is a large language model (LLM) from OpenAI that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It is still under development, but it has already been used for a variety of purposes, including creative writing, code generation, and research.
However, ChatGPT also poses some security and privacy risks. These risks are highlighted in the following articles:
Overall, ChatGPT is a powerful tool with a number of potential benefits. However, it is important to be aware of the security and privacy risks associated with using it. Users should carefully consider the instructions they give to ChatGPT and only use trusted plugins. They should also be careful about what websites and web applications they authorize ChatGPT to access.
Here are some additional tips for using ChatGPT safely:
Artificial intelligence (AI) is rapidly transforming healthcare, with the potential to revolutionize the way we diagnose, treat, and manage diseases. However, as with any emerging technology, there are also ethical concerns that need to be addressed.
AI systems are often complex and opaque, making it difficult to understand how they work and reach decisions. This lack of transparency also makes it hard to hold AI systems accountable for their actions. For example, if an AI system makes a mistake that harms a patient, it may be difficult to determine who is responsible and what steps can be taken to prevent similar mistakes in the future.
AI systems are trained on data, and if that data is biased, the AI system will learn to be biased as well. This could lead to AI systems making discriminatory decisions about patients, such as denying them treatment or recommending different treatments based on their race, ethnicity, or socioeconomic status.
AI systems collect and store large amounts of personal data about patients. This data needs to be protected from unauthorized access and use. If patient data is compromised, it could be used for identity theft, fraud, or other malicious purposes.
AI systems could potentially make decisions about patients' care without their consent. This raises concerns about patient autonomy and informed consent. Patients should have a right to understand how AI is being used to make decisions about their care and to opt out of AI-based care if they choose.
Guidelines for Addressing Ethical Issues:
In addition to the aforementioned factors, it's critical to be mindful of how AI could exacerbate already-existing healthcare disparities. AI systems might be utilized, for instance, to create novel medicines that are only available to wealthy patients. Alternatively, AI systems might be applied to target vulnerable people for the marketing of healthcare goods and services.